Computational Models with No Linear Speedup

Authors

  • Amir M. Ben-Amram
  • Niels H. Christensen
  • Jakob Grue Simonsen
Abstract

The linear speedup theorem states, informally, that constants do not matter: it is essentially always possible to find a program that solves any decision problem a factor of 2 faster. This result is a classical theorem in computing, but also one of the most debated. The main ingredient of the typical proof of the linear speedup theorem is tape compression, where a fast machine is constructed whose tape alphabet or number of tapes is far greater than that of the original machine. In this paper, we prove that limiting Turing machines to a fixed alphabet and a fixed number of tapes rules out linear speedup. Specifically, we describe a language that can be recognized in linear time (e.g., 1.51n), and provide a proof, based on Kolmogorov complexity, that the computation cannot be sped up (e.g., below 1.49n). Without the tape and alphabet limitation, the linear speedup theorem does hold and yields machines of time complexity of the form (1 + ε)n for arbitrarily small ε > 0. Earlier results negating linear speedup in alternative models of computation have often been based on the existence of very efficient universal machines. In the vernacular of programming language theory: these models have very efficient self-interpreters. As the second contribution of this paper, we define a class, PICSTI, of computation models that exactly captures this property, and we disprove the Linear Speedup Theorem for every model in this class, thus generalizing all similar, model-specific proofs.

∗ Jakob Grue Simonsen is partially supported by the Danish Council for Independent Research Sapere Aude grant “Complexity through Logic and Algebra” (COLA).
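
For reference, one common textbook formulation of the classical speedup theorem (the exact additive terms vary between presentations) reads: if a language L is decided by a multitape Turing machine in time f(n), then for every ε > 0 there is a Turing machine, generally with a larger tape alphabet (and possibly more tapes), that decides L in time

    ε·f(n) + n + 2.

For a linear-time bound f(n) = c·n this yields running times of the form (1 + ε')n for arbitrarily small ε' > 0, which is precisely the kind of speedup the present paper shows to be impossible once the tape alphabet and the number of tapes are fixed.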

Similar articles

A generalized implicit enumeration algorithm for a class of integer nonlinear programming problems

Presented here is a generalization of the implicit enumeration algorithm that can be applied when the objective function is being maximized and can be rewritten as the difference of two non-decreasing functions. Also developed is a computational algorithm, named linear speedup, to use whatever explicit linear constraints are present to speed up the search for a solution. The method is easy to u...

Parallel algorithms for large scale econometric models

In this paper we develop algorithms to solve macroeconometric models with forward-looking variables, based on Newton's method for nonlinear systems of equations. The most difficult step of Newton's method is the solution of a large linear system at each iteration. We therefore compare the performance obtained by solving this linear system using two iterative methods and the direct m...
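
For context only, a minimal Python sketch of the generic textbook form of Newton's method for a nonlinear system F(x) = 0 (not the solver developed in the paper above) shows where the per-iteration linear solve arises:

    import numpy as np

    def newton(F, J, x0, tol=1e-10, max_iter=50):
        """Solve F(x) = 0 by Newton's method, given the Jacobian J."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            # The expensive step: the linear system J(x) * delta = -F(x)
            # (typically very large for macroeconometric models) must be
            # solved at every iteration.
            delta = np.linalg.solve(J(x), -Fx)
            x = x + delta
        return x

    # Toy example: x0^2 + x1^2 = 1 and x0 = x1, solution (1/sqrt(2), 1/sqrt(2)).
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
    J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
    print(newton(F, J, [1.0, 0.5]))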

Distributed Scheduling of Nonlinear Computational Loads

It is demonstrated that supra-linear (greater than linear) speedup is possible in processing distributed divisible computational loads where computation time is a nonlinear function of load size. This result is radically different from traditional work on the distributed processing of computational loads with linear processing complexity, which appears in over 50 journal papers.
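
As an illustrative back-of-the-envelope calculation (assuming, purely for concreteness, that processing a load of size x takes time proportional to x²): one processor handles a load X in time proportional to X², while m processors each handling a fraction X/m finish in time proportional to (X/m)², so the speedup is X² / (X/m)² = m², which exceeds the processor count m whenever m > 1.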

An efficient quantum algorithm for generative machine learning

A central task in the field of quantum computing is to find applications where a quantum computer could provide an exponential speedup over any classical computer [1–3]. Machine learning represents an important field with broad applications where a quantum computer may offer a significant speedup [4–8]. Several quantum algorithms for discriminative machine learning [9] have been found based on efficient...

Rolling Partial Prefix-Sums To Speedup Evaluation of Uniform and Affine Recurrence Equations

As multithreaded and reconfigurable logic architectures play an increasing role in high-performance computing (HPC), the scientific community is in need of new programming models for efficiently mapping existing applications to the new parallel platforms. In this paper, we show how we can effectively exploit tightly coupled fine-grained parallelism in architectures such as GPUs and FPGAs to spee...

Journal:
  • Chicago J. Theor. Comput. Sci.

Volume 2012, Issue -

Pages -

Publication date: 2012